In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 29, 2025 - 21:30 UTC
Scheduled - Codespaces will be undergoing global maintenance from May 29, 2025 21:30 UTC to May 31, 2025 04:30 UTC. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete.

During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones.

To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts. Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.
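
To make the advice above concrete, here is a minimal, illustrative Python sketch (not part of GitHub's official guidance) that warns about uncommitted changes and pushes the current branch; it assumes git is installed, the script runs inside the repository, and the branch has an upstream configured.

# Illustrative pre-maintenance check: warn about uncommitted changes and push
# the current branch. Assumes git is on PATH and an upstream branch is set.
import subprocess
import sys

def run(*args: str) -> str:
    """Run a git command in the current repository and return its stdout."""
    return subprocess.run(
        ["git", *args], check=True, capture_output=True, text=True
    ).stdout

def main() -> int:
    if run("status", "--porcelain").strip():
        print("Uncommitted changes found; commit them before maintenance starts.")
        return 1
    run("push")  # push local commits so nothing depends on the codespace itself
    print("Working tree is clean and pushed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())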

May 29, 2025 21:30 - May 31, 2025 04:30 UTC

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations: Operational
Webhooks: Operational
Visit www.githubstatus.com for more information: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Under Maintenance
Copilot: Operational
May 30, 2025

No incidents reported today.

May 29, 2025
Completed - The scheduled maintenance has been completed.
May 29, 16:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 28, 16:30 UTC
Scheduled - Codespaces will be undergoing global maintenance from 16:30 UTC on Wednesday, May 28 to 16:30 UTC on Thursday, May 29. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete.

During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones.

To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts. Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.

May 22, 15:26 UTC
May 28, 2025
Resolved - On May 28, 2025, from approximately 09:45 UTC to 14:45 UTC, GitHub Actions experienced delayed job starts for workflows in public repos using Ubuntu-24 standard hosted runners. This was caused by a misconfiguration in backend caching behavior after a failover, which led to duplicate job assignments and reduced available capacity. Approximately 19.7% of Ubuntu-24 hosted runner jobs on public repos were delayed. Other hosted runners, self-hosted runners, and private repo workflows were unaffected.

By 12:45 UTC, we mitigated the issue by redeploying backend components to reset state and scaling up available resources to more quickly work through the backlog of queued jobs. We are working to improve our deployment and failover resiliency and validation to reduce the likelihood of similar issues in the future.

May 28, 14:43 UTC
Update - We are continuing to monitor the affected Actions runners to ensure a smooth recovery.
May 28, 14:35 UTC
Update - We are observing indications of recovery with the affected Actions runners.

The team will continue monitoring systems to ensure a return to normal service.

May 28, 13:42 UTC
Update - We're continuing to investigate delays in Actions runners for hosted Ubuntu 24.

We will provide further updates as more information becomes available.

May 28, 12:41 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
May 28, 11:49 UTC
Update - Actions is experiencing high wait times for obtaining standard hosted runners for Ubuntu 24. Other hosted labels and self-hosted runners are not impacted.
May 28, 11:42 UTC
Investigating - We are currently investigating this issue.
May 28, 11:11 UTC
May 27, 2025
Resolved - On May 27, 2025, between 09:31 UTC and 13:31 UTC, some Actions jobs experienced failures uploading to and downloading from the Actions Cache service. During the incident, 6% of all workflow runs couldn’t upload or download cache entries from the service, resulting in a non-blocking warning message in the logs and performance degradation. The disruption was caused by an infrastructure update related to the retirement of a legacy service, which unintentionally impacted Cache service availability. We resolved the incident by reverting the change and have since implemented a permanent fix to prevent recurrence.

We are improving our configuration change processes by introducing additional end-to-end tests to cover the identified gaps, and implementing deployment pipeline improvements to reduce mitigation time for similar issues in the future.

May 27, 13:31 UTC
Update - Mitigation is applied and we’re seeing signs of recovery. We’re monitoring the situation until the mitigation is applied to all affected repositories.
May 27, 13:03 UTC
Update - We are experiencing degradation with the GitHub Actions cache service and are working on applying the appropriate mitigations.
May 27, 12:27 UTC
Investigating - We are investigating reports of degraded performance for Actions
May 27, 12:26 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 27, 12:41 UTC
Investigating - We are currently investigating this issue.
May 27, 12:20 UTC
Resolved - Between 10:00 and 20:00 UTC on May 27, a change to our git proxy service resulted in some git client implementations not being able to consistently push to GitHub. Reverting the change resulted in an immediate resolution of the problem for all customers. Detection took longer than usual because relatively few clients were impacted. We are re-evaluating the proposed change to understand how we can prevent and detect such failures in the future.

May 27, 10:00 UTC
May 26, 2025
Resolved - On May 26, 2025, between 06:20 UTC and 09:45 UTC, GitHub experienced broad failures across a variety of services (API, Issues, Git, etc.). Impact was intermittent, but failure rates peaked at 100% for some operations during this window.

On May 23, a new feature was added to the Copilot APIs and monitored during rollout, but it was not tested at peak load. At 06:20 UTC on May 26, load increased on the code path in question and began to degrade a Copilot API, because both the caching for this endpoint and the circuit breakers for high load were misconfigured.

In addition, the traffic limiting meant to protect wider swaths of the GitHub API from queuing did not yet cover this endpoint, so it was able to overwhelm the capacity to serve traffic and cause request queuing.
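
For illustration only (GitHub's internal implementation is not public), the sketch below shows the kind of per-endpoint circuit breaker referred to above: once an endpoint crosses a failure threshold, the breaker fails fast and sheds load instead of letting requests queue behind a degraded dependency. All names and thresholds here are hypothetical.

# Illustrative circuit breaker sketch; not GitHub's internal code.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold  # trip after this many consecutive failures
        self.reset_after = reset_after              # seconds before a tripped breaker retries
        self.failures = 0
        self.opened_at = None                       # monotonic timestamp when tripped, or None

    def call(self, fn, *args, **kwargs):
        # While open, fail fast instead of queuing more work behind a degraded endpoint.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: shedding load")
            self.opened_at = None  # half-open: allow one trial request through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result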

We were able to mitigate the incident by turning off the endpoint until the behavior could be reverted.

We are already rolling out a quality-of-service strategy for API endpoints like this one to limit the impact of broad incidents. We are also addressing the specific caching and circuit breaker misconfigurations for this endpoint, which would have reduced both the blast radius of this particular incident and the time to mitigate it.

May 26, 10:17 UTC
Update - We continue to see signs of recovery.
May 26, 10:09 UTC
Update - Issues is operating normally.
May 26, 09:51 UTC
Update - Git Operations is operating normally.
May 26, 09:46 UTC
Update - API Requests is operating normally.
May 26, 09:44 UTC
Update - Copilot is operating normally.
May 26, 09:43 UTC
Update - Packages is operating normally.
May 26, 09:43 UTC
Update - Actions is operating normally.
May 26, 09:42 UTC
Update - Packages is experiencing degraded performance. We are continuing to investigate.
May 26, 08:39 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
May 26, 08:26 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
May 26, 08:25 UTC
Update - We are continuing to investigate degraded performance.
May 26, 07:53 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
May 26, 07:35 UTC
Investigating - We are investigating reports of degraded performance for API Requests and Git Operations
May 26, 07:21 UTC
May 25, 2025

No incidents reported.

May 24, 2025

No incidents reported.

May 23, 2025
Resolved - On May 23, 2025, between 17:40 UTC and 18:30 UTC, public API and UI requests to read and write Git repository content were degraded and triggered user-facing 500 responses. The error rate averaged 61% of requests to the service and peaked at 88%. This was due to the introduction of an uncaught fatal error in an internal service. A manual rollback was required, which increased the time to remediate the incident.

We are working to automatically detect and revert changes like this based on alerting, to reduce our time to detection and mitigation. In addition, we are adding relevant test coverage to prevent errors of this type from reaching production.

May 23, 18:33 UTC
Update - API Requests is operating normally.
May 23, 18:33 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
May 23, 18:26 UTC
Investigating - We are currently investigating this issue.
May 23, 18:21 UTC
May 22, 2025
Resolved - On May 22, 2025, between 07:06 UTC and 09:10 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 11% of all workflow runs were delayed by an average of 44 minutes. A recently deployed change contained a defect that caused improper request routing between internal services, resulting in security rejections at the receiving endpoint. We resolved this by reverting the problematic change and are implementing enhanced testing procedures to catch similar issues before they reach production environments.
May 22, 09:17 UTC
Update - We've applied a mitigation which has resolved these delays.
May 22, 09:17 UTC
Update - Our investigation continues. At this stage, GitHub Actions jobs are being executed, albeit with delays to the start of execution in some cases.
May 22, 08:47 UTC
Update - We are continuing to investigate these delays.
May 22, 08:14 UTC
Update - We're investigating delays with the execution of queued GitHub Actions jobs.
May 22, 07:43 UTC
Investigating - We are investigating reports of degraded performance for Actions
May 22, 07:42 UTC
May 21, 2025
Resolved - A change to the webhooks UI removed the ability to add webhooks. The impact lasted from May 20, 2025 20:40 UTC to May 21, 2025 12:55 UTC. Existing webhooks, as well as adding webhooks via the API, were unaffected. The issue has been fixed.
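
Because the API path was unaffected, webhooks could still be added programmatically during the impact window. The sketch below is illustrative only; the owner, repository, payload URL, and token values are placeholders.

# Illustrative workaround: create a repository webhook via the REST API,
# which was unaffected during this incident. All values below are placeholders.
import json
import urllib.request

def create_webhook(owner: str, repo: str, token: str, payload_url: str) -> dict:
    body = json.dumps({
        "name": "web",                      # "web" is the required name for repository webhooks
        "active": True,
        "events": ["push"],
        "config": {"url": payload_url, "content_type": "json"},
    }).encode()
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/hooks",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
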
May 21, 09:00 UTC
May 20, 2025
Resolved - On May 20, 2025, between 18:18 UTC and 19:53 UTC, Copilot Code Completions were degraded in the Americas. On average, the error rate was 50% of requests to the service in the affected region. This was due to a misconfiguration in load distribution parameters after a scale-down operation.

We mitigated the incident by addressing the misconfiguration.

We are working to improve our automated failover and load balancing mechanisms to reduce our time to detection and mitigation of issues like this one in the future.

May 20, 20:02 UTC
Update - Copilot is operating normally.
May 20, 20:01 UTC
Update - We are experiencing degraded availability for Copilot Code Completions in the Americas. We are working on resolving the issue.

May 20, 19:43 UTC
Investigating - We are investigating reports of degraded performance for Copilot
May 20, 19:37 UTC
Resolved - On May 20, 2025, between 12:09 UTC and 16:07 UTC, the GitHub Copilot service experienced degraded availability, specifically for the Claude Sonnet 3.7 model. During this period, the success rate for Claude Sonnet 3.7 requests was highly variable, dropping to approximately 94% during the most severe spikes. Other models remained available and worked as expected throughout the incident.
The issue was caused by capacity constraints in our model processing infrastructure that affected our ability to handle the large volume of Claude Sonnet 3.7 requests.
We mitigated the incident by rebalancing traffic across our infrastructure, adjusting rate limits, and working with our infrastructure teams to resolve the underlying capacity issues. We are working to improve our infrastructure redundancy and to implement more robust monitoring to reduce detection and mitigation time for similar incidents in the future.

May 20, 16:08 UTC
Update - Copilot is operating normally.
May 20, 16:08 UTC
Update - The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 is once again available in Copilot Chat, VS Code and other Copilot products.

We will continue monitoring to ensure stability, but mitigation is complete.

May 20, 16:07 UTC
Update - We are continuing to work with our model providers on mitigations to increase the success rate of Sonnet 3.7 requests made via Copilot.
May 20, 14:59 UTC
Update - We’re still working with our model providers on mitigations to increase the success rate of Sonnet 3.7 requests made via Copilot.
May 20, 14:15 UTC
Update - We are experiencing degraded availability for the Claude Sonnet 3.7 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

May 20, 13:33 UTC
Investigating - We are investigating reports of degraded performance for Copilot
May 20, 13:10 UTC
May 19, 2025

No incidents reported.

May 18, 2025

No incidents reported.

May 17, 2025
Resolved - Between May 16, 2025 13:21 UTC and May 17, 2025 02:26 UTC, the GitHub Enterprise Importer service was degraded and experienced slow processing of customer migrations. Customers may have seen extended wait times for migrations to start or complete.

This incident was initially observed as a slowdown in migration processing. During our investigation, we identified that a recent change aimed at improving API query performance caused an increase in load signals, which triggered migration throttling. As a result, the performance of migrations was negatively impacted, and overall migration duration increased. In parallel, we identified a race condition that caused a specific migration to be repeatedly re-queued, further straining system resources and contributing to a backlog of migration jobs, resulting in accumulated delays. No data was lost, and all migrations were ultimately processed successfully.
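
As a generic illustration of the failure mode described above (not GitHub's actual code), the sketch below shows how a check-then-act race between workers can re-enqueue the same migration more than once, and how making the check and the enqueue a single atomic step avoids it. All names are hypothetical.

# Generic illustration of a check-then-act re-queue race; not GitHub's code.
import threading

queued_ids: set[str] = set()   # shared state: migration ids believed to be in the queue
lock = threading.Lock()

def requeue_unsafe(migration_id: str, queue: list[str]) -> None:
    # Racy: another worker can enqueue the same id between the check and the append,
    # so the migration ends up in the queue twice and is processed repeatedly.
    if migration_id not in queued_ids:
        queued_ids.add(migration_id)
        queue.append(migration_id)

def requeue_safe(migration_id: str, queue: list[str]) -> None:
    # Fix: hold a lock so the membership check and the enqueue happen atomically.
    with lock:
        if migration_id not in queued_ids:
            queued_ids.add(migration_id)
            queue.append(migration_id)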

We have reverted the feature flag associated with a query change and are working to improve system safeguards to help prevent similar race condition issues from occurring in the future.

May 17, 02:27 UTC
Update - We continue to see signs of recovery for GitHub Enterprise Importer migrations. Queue depth is decreasing and migration duration is trending toward normal levels. We will continue to monitor improvements.
May 17, 02:26 UTC
Update - We have identified the source of increased load and have started mitigation. Customers using the GitHub Enterprise Importer may still see extended wait times until recovery completes.
May 16, 22:33 UTC
Update - Investigations on the incident impacting GitHub Enterprise Importer continue. An additional contributing cause has been identified, and we are working to ship additional mitigating measures.
May 16, 20:36 UTC
Update - We have taken several steps to mitigate the incident impacting GitHub Enterprise Importer (GEI). We are seeing early indications of system recovery. However, customers may continue to experience longer migrations and extended queue times. The team is continuing to work on further mitigating efforts to speed up recovery.
May 16, 18:19 UTC
Update - We are continuing to investigate issues with the GitHub Enterprise Importer. Customers may experience slower migration processes and extended wait times.
May 16, 15:32 UTC
Update - We are investigating issues with the GitHub Enterprise Importer. Customers may experience slower migration processes and extended wait times.
May 16, 14:06 UTC
Investigating - We are currently investigating this issue.
May 16, 13:46 UTC
May 16, 2025
Resolved - On May 16, 2025, between 08:42 UTC and 12:26 UTC, the data store powering the Audit Log API service experienced elevated latency, resulting in higher error rates due to timeouts. About 3.8% of Audit Log API queries for Git events timed out. The data store team deployed mitigations, which resulted in a full recovery of the data store's availability.
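
For callers of the Audit Log API, the practical symptom was intermittent timeouts on Git-event queries. The sketch below is illustrative only: it queries an organization's audit log for Git events and retries timed-out requests with backoff; the organization name and token are placeholders.

# Illustrative only: query Git events from the organization audit log API with a
# simple retry on timeouts, the failure mode seen in this incident.
import json
import time
import urllib.error
import urllib.request

def fetch_git_audit_events(org: str, token: str, retries: int = 3) -> list:
    url = f"https://api.github.com/orgs/{org}/audit-log?include=git&per_page=100"
    req = urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                return json.load(resp)
        except (urllib.error.URLError, TimeoutError):
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # back off before retrying a timed-out query
    return []
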
May 16, 15:24 UTC
Update - We are investigating issues with the audit log. Users querying Git audit log data may observe increased latencies and occasional timeouts.
May 16, 10:22 UTC
Investigating - We are currently investigating this issue.
May 16, 09:22 UTC